Speech intelligibility assessment plays an important role in the therapy of patients suffering from pathological speech disorders. Automatic and objective measures are desirable to assist therapists in their traditionally subjective and labor-intensive assessments. In this work, we investigate a novel approach to obtain such a measure using the divergence in disentangled latent speech representations of parallel utterance pairs from a healthy reference speaker and a pathological speaker. Experiments on an English database using all available utterances per speaker show a high and significant correlation (r = -0.9) with subjective intelligibility measures, with only minimal deviation (±0.01) across four different reference speaker pairs. We further demonstrate robustness by considering far fewer utterances per speaker, observing a deviation of only ±0.02 across 1000 iterations. Our results are among the first to show that disentangled speech representations can be used for automatic pathological speech intelligibility assessment, yielding a reference speaker pair-invariant method applicable in scenarios with only a few utterances available.
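A minimal sketch of the core measurement, with randomly generated latents standing in for the outputs of a disentangled speech encoder (the speakers, latents, and ratings below are toy placeholders, not the paper's setup): average the latent distance over parallel utterance pairs per speaker and correlate it with subjective intelligibility ratings.

```python
import numpy as np
from scipy.stats import pearsonr

def intelligibility_measure(ref_latents, path_latents):
    """Mean Euclidean distance between disentangled latents of parallel
    utterance pairs (healthy reference vs. pathological speaker)."""
    dists = [np.linalg.norm(zr - zp) for zr, zp in zip(ref_latents, path_latents)]
    return float(np.mean(dists))

# Toy data: random latents stand in for encoder outputs; worse intelligibility
# makes the pathological latents drift further from the reference.
rng = np.random.default_rng(0)
n_speakers, n_utts, dim = 20, 10, 64
subjective = rng.uniform(0, 100, n_speakers)          # placeholder ratings
measures = []
for s in range(n_speakers):
    ref = [rng.normal(size=dim) for _ in range(n_utts)]
    noise = (100 - subjective[s]) / 100
    path = [z + noise * rng.normal(size=dim) for z in ref]
    measures.append(intelligibility_measure(ref, path))

r, _ = pearsonr(measures, subjective)                  # expect a negative r
print(f"Pearson r = {r:.2f}")
```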
Machine learning models are typically evaluated by computing their similarity with reference annotations and trained by maximizing that similarity. Especially in the biomedical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotating entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating into better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
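As a hedged illustration of the quantitative side (not the paper's exact procedure), one way to approximate such a ceiling is to measure inter-rater agreement, e.g., the mean pairwise Dice overlap between raters annotating the same cases; model-vs-reference scores above this level are unlikely to translate into better RWMP.

```python
import numpy as np
from itertools import combinations

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum() + 1e-8)

def inter_rater_ceiling(cases):
    """Mean pairwise Dice between raters, averaged over cases; serves as a
    rough annotation-quality ceiling for model-vs-reference scores."""
    scores = []
    for masks in cases:
        scores += [dice(a, b) for a, b in combinations(masks, 2)]
    return float(np.mean(scores))

# Toy data: 5 cases, 3 raters each, with slight simulated disagreement.
rng = np.random.default_rng(1)
truth = rng.random((5, 32, 32)) > 0.5
cases = [[np.logical_xor(t, rng.random(t.shape) > 0.95) for _ in range(3)]
         for t in truth]
print(f"inter-rater Dice ceiling ~ {inter_rater_ceiling(cases):.3f}")
```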
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
We propose an end-to-end inverse rendering pipeline called SupeRVol that allows us to recover 3D shape and material parameters from a set of color images in a super-resolution manner. To this end, we represent both the bidirectional reflectance distribution function (BRDF) and the signed distance function (SDF) by multi-layer perceptrons. In order to obtain both the surface shape and its reflectance properties, we resort to a differentiable volume renderer with a physically based illumination model that allows us to decouple reflectance and lighting. This physical model takes into account the effect of the camera's point spread function, thereby enabling a reconstruction of shape and material in super-resolution quality. Experimental validation confirms that SupeRVol achieves state-of-the-art performance in terms of inverse rendering quality. It generates reconstructions that are sharper than the individual input images, making this method ideally suited for 3D modeling from low-resolution imagery.
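A toy sketch of the point-spread-function idea under simplified assumptions (a Gaussian PSF and plain downsampling, not SupeRVol's actual forward model): rendering at a higher resolution and passing it through the PSF before comparing against the low-resolution observations is what allows detail beyond the sensor resolution to be recovered.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def forward_model(hires_render, psf_sigma=1.5, factor=4):
    """Toy image formation: blur a high-resolution rendering with a Gaussian
    PSF and downsample to the observed (low) sensor resolution."""
    blurred = gaussian_filter(hires_render, sigma=psf_sigma)
    return blurred[::factor, ::factor]

# Photometric loss against a low-resolution observation; minimizing it w.r.t.
# the scene parameters behind `hires_render` (here just a random stand-in)
# is what recovers detail beyond the sensor resolution.
rng = np.random.default_rng(2)
hires_render = rng.random((256, 256))
observation = forward_model(rng.random((256, 256)))
loss = np.mean((forward_model(hires_render) - observation) ** 2)
print(f"photometric loss = {loss:.4f}")
```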
Quantifying the perceptual similarity of two images is a long-standing problem in low-level computer vision. The natural image domain commonly relies on supervised learning, e.g., a pre-trained VGG, to obtain a latent representation. However, due to domain shift, pre-trained models from the natural image domain might not apply to other image domains, such as medical imaging. Notably, in medical imaging, evaluating the perceptual similarity is exclusively performed by specialists trained extensively in diverse medical fields. Thus, medical imaging remains devoid of task-specific, objective perceptual measures. This work answers the question: Is it necessary to rely on supervised learning to obtain an effective representation that could measure perceptual similarity, or is self-supervision sufficient? To understand whether recent contrastive self-supervised representation (CSR) may come to the rescue, we start with natural images and systematically evaluate CSR as a metric across numerous contemporary architectures and tasks and compare them with existing methods. We find that in the natural image domain, CSR behaves on par with the supervised one on several perceptual tests as a metric, and in the medical domain, CSR better quantifies perceptual similarity concerning the experts' ratings. We also demonstrate that CSR can significantly improve image quality in two image synthesis tasks. Finally, our extensive results suggest that perceptuality is an emergent property of CSR, which can be adapted to many image domains without requiring annotations.
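A minimal sketch of using an embedding as a perceptual metric; the random-projection "encoder" below is a placeholder for a frozen contrastive self-supervised backbone (e.g., SimCLR or MoCo features), which is an assumption for illustration rather than the paper's exact setup.

```python
import numpy as np

def perceptual_distance(encoder, img_a, img_b):
    """Cosine distance between encoder embeddings, used as a perceptual metric."""
    za, zb = encoder(img_a), encoder(img_b)
    cos = np.dot(za, zb) / (np.linalg.norm(za) * np.linalg.norm(zb) + 1e-8)
    return 1.0 - cos

# Placeholder encoder: a fixed random projection of the flattened pixels;
# a real metric would use a frozen self-supervised backbone instead.
rng = np.random.default_rng(3)
W = rng.normal(size=(64, 32 * 32))
encoder = lambda img: W @ img.reshape(-1)

img = rng.random((32, 32))
noisy = img + 0.1 * rng.normal(size=img.shape)
print(f"d(img, img)   = {perceptual_distance(encoder, img, img):.3f}")
print(f"d(img, noisy) = {perceptual_distance(encoder, img, noisy):.3f}")
```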
Image segmentation is a widely researched field in which neural networks find vast applications across many facets of technology. Some of the most popular approaches to train segmentation networks employ loss functions optimizing pixel overlap, an objective that is insufficient for many segmentation tasks. In recent years, their limitations fueled a growing interest in topology-aware methods, which aim to recover the correct topology of the segmented structures. However, so far, none of the existing approaches achieve a spatially correct matching between the topological features of ground truth and prediction. In this work, we propose the first topologically and feature-wise accurate metric and loss function for supervised image segmentation, which we term Betti matching. We show how induced matchings guarantee the spatially correct matching between barcodes in a segmentation setting. Furthermore, we propose an efficient algorithm to compute the Betti matching of images. We show that the Betti matching error is an interpretable metric to evaluate the topological correctness of segmentations, and that it is more sensitive than the well-established Betti number error. Moreover, the differentiability of the Betti matching loss enables its use as a loss function. It improves the topological performance of segmentation networks across six diverse datasets while preserving the volumetric performance. Our code is available at https://github.com/nstucki/Betti-matching.
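For orientation, here is a small sketch of the well-established baseline mentioned above, the Betti number error, for 2D binary masks (connected components and enclosed holes via scipy); unlike the Betti matching error it performs no spatial matching of topological features, and the connectivity conventions are simplified.

```python
import numpy as np
from scipy import ndimage

def betti_numbers_2d(mask):
    """Toy Betti numbers of a 2D binary mask: b0 = foreground components,
    b1 = enclosed background holes (background components off the border)."""
    _, b0 = ndimage.label(mask)
    bg_labels, n_bg = ndimage.label(~mask)
    border = np.unique(np.concatenate([bg_labels[0], bg_labels[-1],
                                       bg_labels[:, 0], bg_labels[:, -1]]))
    b1 = n_bg - np.count_nonzero(border)
    return b0, b1

def betti_number_error(pred, gt):
    """Baseline topological metric: |b0 difference| + |b1 difference|,
    without any spatial matching of the underlying features."""
    b0p, b1p = betti_numbers_2d(pred)
    b0g, b1g = betti_numbers_2d(gt)
    return abs(b0p - b0g) + abs(b1p - b1g)

gt = np.zeros((16, 16), dtype=bool)
gt[4:12, 4:12] = True
gt[7:9, 7:9] = False        # a ring: one component, one hole
pred = gt.copy()
pred[7:9, 7:9] = True       # prediction fills the hole
print(betti_number_error(pred, gt))   # 1 (the missing hole)
```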
Statistical shape modeling aims to capture the shape variability of anatomical structures within a given population. Shape models are used for many tasks, such as shape reconstruction and image segmentation, but also for shape generation and classification. Existing shape priors either require dense correspondences between training examples or lack robustness and topological guarantees. We present FlowSSM, a novel shape modeling approach that learns shape variability without requiring dense correspondences between training instances. It relies on a hierarchy of continuous deformation flows parameterized by neural networks. Our model outperforms state-of-the-art methods in providing an expressive and robust shape prior for distal femora and livers. We show that the emerging latent representation is discriminative by separating healthy from pathological shapes. Finally, we demonstrate its effectiveness on two shape reconstruction tasks from partial data. Our source code is publicly available (https://github.com/davecasp/flowssm).
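A simplified stand-in for one level of the deformation hierarchy, under the assumption that an MLP conditioned on a shape latent displaces template surface points (the actual method uses hierarchical continuous flows; names and dimensions below are illustrative):

```python
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Simplified single-level stand-in for FlowSSM's deformation hierarchy:
    an MLP maps (template point, shape latent) to a displaced point."""
    def __init__(self, latent_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, points, z):
        # points: (N, 3) template surface points, z: (latent_dim,) shape code
        z = z.expand(points.shape[0], -1)
        return points + self.net(torch.cat([points, z], dim=-1))

# Deforming a template point cloud with a per-shape latent code; fitting the
# latents and the network to data yields a correspondence-free shape model.
field = DeformationField()
template = torch.rand(1000, 3)
z = torch.zeros(32, requires_grad=True)
deformed = field(template, z)
print(deformed.shape)   # torch.Size([1000, 3])
```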
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and inter-rater variability. Automated rating may benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the results of the Vascular Lesions Detection and Segmentation ("Where is VALDO?") challenge, which was run as a satellite event of the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. The challenge aimed to promote the development of methods for the automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1 - EPVS, 9 for Task 2 - Microbleeds, and 6 for Task 3 - Lacunes). Data from multiple centers were used for both training and evaluation. Results showed a large variability in performance across teams and tasks, with particularly promising results for Task 1 - EPVS and Task 2 - Microbleeds, while results for Task 3 - Lacunes are not yet of practical use. The challenge also highlighted performance inconsistencies across cases that may hinder use at the individual level, while still proving useful at the population level.
Humans innately measure the distance between instances in an unlabeled dataset using an unknown similarity function. Distance metrics can only serve as a proxy for that similarity when retrieving similar instances. Learning a good similarity function from human annotations improves the quality of retrieval. This work uses deep metric learning to learn such user-defined similarity functions from very few annotations on a large soccer trajectory dataset. We adapt recent work on entropy-based active learning with triplet mining to collect annotations that are easy to elicit from human participants yet still informative, and use them to train a deep convolutional network that generalizes to unseen samples. Our user study shows that our approach improves the quality of information retrieval compared to previous deep metric learning approaches that rely on Siamese networks. Specifically, we shed light on the strengths and weaknesses of passive sampling heuristics and active learners by analyzing the participants' response efficiency. To this end, we collect accuracy, algorithmic time complexity, participants' fatigue and response times, qualitative self-assessments and statements, as well as the effect of mixed-expertise annotators and their consistency on model performance and transfer learning.
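A minimal sketch of the triplet-based metric-learning step that such annotations feed into, using a toy embedding network over fixed-length trajectory features (an assumption; the paper uses a deep convolutional network and entropy-based active triplet selection):

```python
import torch
import torch.nn as nn

# Toy embedding network over fixed-length trajectory feature vectors.
embed = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 16))
loss_fn = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(embed.parameters(), lr=1e-3)

# One training step on human-annotated triplets (anchor, positive, negative),
# i.e. "the anchor is more similar to the positive than to the negative";
# an active learner would pick the triplets the current model is least sure about.
anchor, positive, negative = (torch.rand(32, 20) for _ in range(3))
loss = loss_fn(embed(anchor), embed(positive), embed(negative))
opt.zero_grad()
loss.backward()
opt.step()
print(f"triplet loss = {loss.item():.3f}")
```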
Optical coherence tomography angiography (OCTA) can non-invasively image the eye's circulatory system. To reliably characterize the retinal vasculature, it is necessary to automatically extract quantitative metrics from these images. The calculation of such biomarkers requires a precise semantic segmentation of the blood vessels. However, deep-learning-based segmentation methods mostly rely on supervised training with voxel-level annotations, which are costly to obtain. In this work, we present a pipeline to synthesize large amounts of realistic OCTA images with intrinsically matching ground-truth labels, thereby removing the need for manually annotated training data. Our proposed method is based on two novel components: 1) a physiology-based simulation that models the various retinal vascular plexuses, and 2) a suite of physics-based image augmentations that emulate the OCTA image acquisition process, including typical artifacts. In extensive benchmarking experiments, we demonstrate the utility of our synthetic data by successfully training retinal vessel segmentation algorithms. Encouraged by our method's competitive quantitative and superior qualitative performance, we believe it constitutes a versatile tool to advance the quantitative analysis of OCTA images.
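A hedged sketch of the second component, with generic stand-ins for acquisition effects (speckle-like noise, a motion line artifact, optical blur); the specific functions and parameters are illustrative, not the paper's augmentation suite.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def augment_octa(img, rng):
    """Generic stand-ins for acquisition effects: speckle-like multiplicative
    noise, a motion line artifact, and optical blur."""
    img = img * rng.gamma(shape=4.0, scale=0.25, size=img.shape)
    row = rng.integers(0, img.shape[0])
    img[row] = np.roll(img[row], rng.integers(-3, 4))
    img = gaussian_filter(img, sigma=rng.uniform(0.3, 1.0))
    return np.clip(img, 0.0, 1.0)

# The augmentations touch only the image, so the synthetic vessel map and its
# label stay perfectly aligned, which is the point of the intrinsic matching.
rng = np.random.default_rng(4)
synthetic_vessels = (rng.random((64, 64)) > 0.9).astype(float)
label = synthetic_vessels.copy()
image = augment_octa(synthetic_vessels.copy(), rng)
print(image.shape, int(label.sum()))
```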